Garou World
VR Multiplayer Publishing Platform | Interaction Design | UX Design
Garou is a publishing platform & marketplace for amazing VR experiences.
A place where people can go to visit, interact, and transact. A registry of VR experiences that uses real-world geography as the primary device for navigation.
Company
Garou, Inc.
Role
Interaction Designer / Developer
Awards
Epic Mega Grants 2020
2nd Prize, Verizon ‘Built on 5G’ Challenge 2020
Technology
UE4 / Amazon Web Services / Perforce / 3ds Max / Figma
Time
Since 2019
Ideation and Task Planning
Design System
PAIN POINT
The design was inconsistent and not user-friendly
RESPONSIBILITY
Referencing existing VR design guidelines
User-centered principles, with designs tailored for VR
Branded color palette
Detailed annotation for developers
In-game Web Browser
Menu Design
PAIN POINT
The avatar menu needs to scale and remain easy to navigate as new avatars are introduced
VERSION 01
Limited avatar choices, not scalable
Male/female avatars are mixed
VERSION 02
Belt design, more scalable
Click the center icon to change gender
Not intuitive enough
VERSION 03
Carousel design, easy to understand
Multiple ways to navigate
Preview 3D puppet
Can only select preset avatars
NEXT VERSION
Customization
More color choices
Gender/race friendly
Interaction Design
PAIN POINT
Art installations in the Guggenheim need something more than clicking or touching a flat screen
RESPONSIBILITY
Interact with 3D objects
Give visual hints/cues as guidance
Haptic feedback
Function Design
PAIN POINT
We want to design an annotation tool that allows users to paint in space and leave messages for friends.
RESPONSIBILITY
Visual feedback
Increased user engagement
Users spend more time in the experience
EARLY SKETCH
OTHER DESIGNS
Garou +
Companion Mobile App | UX Design
A way to link the VR and physical world easily
sitemap
wireframe
Spatial <-> Miro Integration
VR Collaboration Platform | Interaction | UX Design
Spatial is an online virtual meeting platform that provides immersive VR/AR experiences, while Miro.com is an online visual collaboration platform that provides a 2D, screen-based experience.
Task
Design a UX flow for Miro.com < > Spatial Integration.
How do we export objects in Spatial (sticky notes, images, 3D models) to Miro?
How do users initiate import? Per object, or all at once? And where does that button live — in Spatial VR, or on the Spatial website?
How do we represent objects laid out in 3D in VR on a 2D plane in Miro? (And vice versa?)
How do we import components in Miro into Spatial? Where do they go in Spatial?
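The 3D-to-2D layout question above can be made concrete. As one possible approach (an illustrative assumption, not Spatial's actual implementation), a top-down orthographic projection flattens each object onto a Miro-style board while preserving its relative layout; the names and scale factor below are hypothetical:

```python
def project_top_down(objects, scale=100):
    """Project 3D object positions (x, y, z) onto a 2D board plane.

    x and z (the floor plane) become board coordinates; y (height)
    is kept only as a z-order hint so higher objects draw on top.
    """
    projected = []
    for obj in objects:
        x, y, z = obj["position"]
        projected.append({
            "name": obj["name"],
            "board_x": x * scale,
            "board_y": z * scale,  # depth maps to the board's vertical axis
            "z_order": y,          # higher objects drawn on top
        })
    return projected

notes = [
    {"name": "sticky-note-1", "position": (1.0, 1.5, 2.0)},
    {"name": "model-car",     "position": (-0.5, 0.0, 1.0)},
]
board_items = project_top_down(notes)
```

The reverse mapping (Miro to Spatial) could then place imported 2D components at a default height on the same floor plane.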
User
CREATIVE WORKER
Brainstorming
Sketch
REMOTE TEAM MEMBER
Share files
Daily meeting
Review
BUSINESS CLIENT
Presentation
PAIN POINT
View-only mode is not enough for visual collaboration software
2D and 3D elements do not blend well
BRAINSTORMING AND SKETCH
DESIGN THOUGHT
USER-FRIENDLY
Easy interaction
Prevent misoperation
CONSISTENCY
Similar user flow
Same design language
HIGHLIGHT
EXPORT | 2D
IMPORT
EXPORT 3D
LIVE EDIT
ADD COMMENT
Craft
Inclusive VR Painting Solution | Inclusive UX Design
Designing for every person means embracing human diversity as it relates to the nuances in the way we interact with ourselves, each other, objects, and environments. Designing for inclusivity opens up our experiences and reflects how people adapt to the world around them. What does this look like in the evolving deskless workplace? By “deskless workplace,” we mean settings where people aren’t constrained to one traditional or conventional office.
Goal: Design a product, service, or solution to solve for exclusion in a deskless workplace.
Since we are all designers in a program that involves a variety of artists, we connected with the idea of designing for artists who have limited hand mobility and want to create content in VR. Our professor Dana Karwas and Nick Katsivelos from Microsoft encouraged us to work in this area, since not much had been done.
Team
Shimin Gu / Cherisha Agarwal / Joanna Yen / Pratik Jain /
Raksha Ravimohan / Srishti Kush
Role
Technical Director / UX Researcher
Technology
UE4 / Oculus Rift / Sketch
Time
2018
Showcase
Problem Statement
How might we build a multi-modal tool for people with limited mobility in their arms to create art in Virtual Reality?
Initial Research
Approaching this idea was not easy; it involved talking to many designers at IDM and understanding their workflows. We discussed how they use software and the problems that people with limited hand mobility would run into. We were fascinated by the idea of a multi-modal application that uses eye tracking and voice commands to enable people to draw in VR. To dive deep into the technology and understand its use, we read several papers (listed here) and got great insights from professors at NYU such as Dana Karwas, Todd Bryant, and Claire K Volpe, as well as UX research experts such as Serap Yigit Elliot from Google and Erica Wayne from Tobii. Every person we spoke to gave us more insight into tackling the challenge from the user experience, VR technology, and accessibility perspectives.
Interview
We had conversations with 115 individuals to validate our assumptions. We tested our prototype with potential users who face mobility issues. We also spoke to accessibility experts and disability centers to get a sense of how they would invest in our product to benefit their community.
PEOPLE WITH DISABILITIES
Paralysis
Cerebral Palsy
Multiple Sclerosis
Spinal Cord Injury
ORGANIZATIONS
Hospitals
Senior Homes
Disability Centers
Accessibility Research Centers
EXPERTS
VR Prototypers
UX Designers
3D Modellers
Accessibility Experts
Therapists
User Persona and journey
Low-fi Prototype
After extensive research, we decided to get our hands dirty and build our very first prototype. How do you prototype an art tool in VR? We figured the best way to convey our interactions was through a role-play video, so we did some rapid prototyping with paper, sharpies, and clips to bring our interface to life.
Paper Prototype and User Test
Once we had insights and a whole lot of motivation from our users, we decided to give structure to our interface. This time, we printed our tools on paper and arranged them in an organized layout. We used a laser pointer as the “eye tracker,” and we were ready for our second round of user testing. Prototyping for VR this way was very interesting and made us think clearly about the details of the interactions and potential issues.
Our second round of user testing revealed two kinds of scenarios based on prior experience. Users who had experience with design software completed the tasks well and pointed out potential issues with the concept, such as judging the z-axis, feedback for interactions, and adding stamps. Users from the ADAPT community without prior experience wanted the tools to be less ambiguous and found some of them unnecessary.
Taking cues from these responses, we went on to build the high-fidelity prototype in Unreal Engine, a good tool for building virtual reality content. To make a working prototype, we needed to accomplish several tasks, explained below:
High-fi Prototype
The final prototype was built in Unreal and supported drawing with head movements, changing the stroke, changing the environment, and teleporting.
The most basic function of an eye-tracking painting tool in virtual reality was realized in our prototype through head movements. With the final prototype, the user could choose tools, paint, teleport, and change the environment, but it was still constrained by some technical limitations. A few functions, such as erase, undo, and redo, could not yet be realized in Unreal, though we hope to make them work with other software and hardware. We also hope to add true eye tracking, making it possible to select tools and draw using gaze movements alone. In the current prototype, the voice instructions are manually monitored; we would like to automate this as well to deliver a fully multi-modal solution for our users.
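The head-movement drawing described above was built in UE4; as a rough illustration of the underlying math only (a sketch under our own assumptions, not the shipped Blueprint code), the head's yaw and pitch can be intersected with a virtual canvas plane placed in front of the user, and tiny jitters filtered out before adding a point to the current stroke:

```python
import math

def head_ray_to_canvas_point(yaw_deg, pitch_deg, canvas_distance=2.0):
    """Map head orientation (yaw/pitch, degrees) to a point on a virtual
    canvas plane placed canvas_distance meters in front of the user."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    # Intersect the forward ray with the plane z = canvas_distance
    x = canvas_distance * math.tan(yaw)
    y = canvas_distance * math.tan(pitch)
    return (x, y)

def append_stroke_point(stroke, yaw_deg, pitch_deg, min_step=0.01):
    """Add a new point to the current stroke, skipping head jitter
    smaller than min_step meters."""
    p = head_ray_to_canvas_point(yaw_deg, pitch_deg)
    if not stroke or math.dist(stroke[-1], p) >= min_step:
        stroke.append(p)
    return stroke
```

A voice command ("start", "stop", "erase") would then toggle whether incoming head poses feed into `append_stroke_point`.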
User Test
Next, we went out to meet potential users from the ADAPT community. ADAPT Community Network, formerly United Cerebral Palsy of New York City, is a leading non-profit organization providing cutting-edge programs and services that improve the quality of life of people with disabilities. There we met Carmen, Eileen, and Chris, who were very interested in painting and loved the idea of drawing in VR. The experience was new to them, and to our surprise, the feedback was very positive: all three expressed interest in trying a tool that let them draw using eye movements and voice commands.
Little Einstein
Online Toy Store Experience | UX Design
Little Einstein is a new online retailer of curated and innovative learning kits for kids. It was formerly a beloved shop in Park Slope, Brooklyn that sold all types of DIY kits (both analog and digital), but the storefront became too expensive and the shop closed in 2012. The owner, Alberta, now wants to take the store online-only and focus her inventory on technology and electronics products geared toward kids ages 4–15.
Our goal is to help the shop owner Alberta create a functional online toy store as well as an online community where parents and kids can hang out and share their unique experiences.
Team
Shimin Gu / Yibing Qian / Fanrui Sun
Role
UX Researcher / Designer
Technology
Sketch / Figma
Time
2018
Sketching and brainstorming
We brainstormed in two rounds. The first round came right after we received the project and read through the shop owner’s requirements. Since this is a toy store, we thought of several concepts that could be used in building the website.
We also drafted several questions that might be useful during the contextual inquiry, and divided the people surveyed into three types: child shoppers, adult shoppers, and shopkeepers.
After our field trip to several toy stores, we gained new insights into operating a toy store, so we held a second round of brainstorming.
1st brainstorming
Questions
2nd brainstorming
Contextual Inquiry
We conducted the contextual inquiry in two stores. The first was Toy Tokyo, a store located in the East Village. The second was Toys R Us, a very large store near Times Square.
See the questions and the summary of the contextual inquiry here
The interview with shopkeepers in Toy Tokyo
The play area in Toys R Us (Times Square)
Item card sorting
We collected items from the provided websites and from Amazon, making sure to pick items from diverse categories, and then card-sorted them in two ways.
In the first round, we sorted by type: robots, art, electronics, math & science, and construction. In the second round, we sorted by age. We started with the ranges 4–7, 7–10, and 10–15; to balance the categories, we changed them to the minimum-age labels 4+, 7+, and 9+.
Card sorting I: By type
Card Sorting II: By age
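The move from age ranges to minimum-age labels can be expressed as a simple mapping. This is a hypothetical sketch of the catalog logic (the labels come from our card sort; the function and sample items are illustrative):

```python
def age_label(min_age):
    """Map a toy's minimum recommended age to one of the three
    catalog labels settled on after card sorting (4+, 7+, 9+)."""
    if min_age >= 9:
        return "9+"
    if min_age >= 7:
        return "7+"
    return "4+"

# Example catalog entries: (toy name, minimum recommended age)
catalog = [("robot kit", 10), ("paint set", 4), ("circuit kit", 7)]
labeled = [(name, age_label(age)) for name, age in catalog]
```

Unlike disjoint ranges, minimum-age labels let one toy appear for every shopper old enough for it, which keeps each category well stocked.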
sitemap
user personas
Evelyn goes directly to what she wants to buy for her grandson as his birthday present. She’s concerned about the quality and safety of the toy, so she reads customer reviews carefully. Since she is not very familiar with online shopping, she asks customer service for help.
Amanda is a very busy mom who doesn’t have much time to shop for toys. She uses the website’s personal recommendation function to choose a toy her son may like. She has a promotion code, so she gets a small discount on this order. She also finds the ‘buy it again’ function helpful for saving time.
Sussie usually visits the website to check its events and sales, or to search for a toy her friends have. Since she can’t pay on her own, she saves toys to her wishlist and waits for her parents to buy them. She is also a frequent user of the community.
user flow
Our solution
Clear categories and a multi-filter system.
Theme events and paired books.
Membership, promotion code and sales.
Personal recommendation based on big data.
Live chat for personal customer service (including customized recommendations).
Community forum for sharing experiences of playing with toys.
Paper Prototype
User Test
KEY TASK:
Task 1: Find a toy for your 8-year-old son.
Task 2: Join a group
Task 3: View your account
Task 4: Seek customer service
PROBLEMS
Can’t see all the products at a glance
Can’t filter by several categories at once
Can’t join a group from the group detail page
Want to view items while using live chat
Want to access live chat from the item page
No wishlist
Customer service should be positioned last in the homepage menu
“Buy now” ↔ “Add to cart” confusion
Walk U Dog
Social Platform | UX Design
Many young people are willing to spend money on their dogs but are too busy to walk them, so our team created a dog-walking service called "Walk U Dog" to help. We hope to build a platform that is both functional and social, where people can find trustworthy dog walkers and share their daily lives with their dogs.
We built a WeChat social platform to publish related articles and attract potential fans. We also built an app prototype and a concept demo.