Designing in VR

By Lawrence Cox

A Designer's Guide To VR

When the head of Oculus, Cory Ondrejka, was asked when Virtual Reality headsets would make it into the average American home, he replied, ‘As soon as we can get them there.’ VR’s potential applications in gaming, immersive experiences and learning will be the ones that get all the credit for that development, but once inside the average home, how we use VR can start to broaden. This includes day-to-day web interactions – those that have been conducted via smartphones for the past half-decade. Yet while the web is filled with insights and guidelines around the application of VR in gaming and immersive experiences, very little research has been done into the impact it may have on regular web experiences. So we decided to right the ship by rebuilding our agency site for VR and exploring our findings in a series of articles.

It’s no surprise that, as an industry, we’re still in the early stages of designing for native Virtual Reality experiences. It’s only this year that the headsets have begun to break through to consumer adoption. With that in mind, here we share a series of articles that showcase our learnings of this new medium, acquired while working on our agency projects.

From screen sizes to viewable areas

Throughout the 1970s and 80s, we predominantly designed for newspapers and billboards. The 90s saw the rise of digital design, and in the early 2000s, that digital design was transplanted onto our mobile screens. So we’ve been conditioned to design within the parameters of specific 2D areas, no matter the platform or medium. Virtual Reality bucks that trend and puts designers in an existential panic. Rather than screen sizes, we’re designing for viewable areas. And the 360 degrees of vision means we’re spoilt for choice when it comes to content positioning and layout. We’ve moved from 2D to 3D. So without the parameters of two dimensions to guide interface design, the first question to ask is: Where does a designer put things?

When building AnalogFolk’s website-in-a-headset prototype, we optimised for a sit-down experience, as that’s how most of us enjoy longer-term web consumption. This means we can immediately start disregarding areas of our 360-degree canvas – those areas that aren’t visible and therefore not usable (for the most part). The space behind the person immediately becomes, well, behind them. Naturally, it’s impractical to have the user constantly looking around, as this can lead to a lack of orientation within the experience and, of course, quickly becomes tedious and tiring. Therefore, we need to work within the user’s field of vision, FOV for short. The specific FOV is headset-dependent, so it fluctuates from one to another, but the average tends to be between 90 and 110 degrees. As we were using the Oculus Rift (Development Kit 1), our FOV was 110 degrees.

VR expert Alex Chu discovered that, on average, you can comfortably rotate your head 30 degrees to the left or right, with a maximum of 55 degrees; you can comfortably look up 20 degrees, to a maximum of 60 degrees; and you can comfortably look down 12 degrees, to a maximum of 40 degrees. When you couple this with your FOV (from a seated position), you can plot the areas people can see comfortably and where they’ll strain.
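
To make those figures concrete, here is a rough TypeScript sketch (illustrative only, not code from our prototype) of how a comfortable or maximum viewable extent combines head rotation with the headset’s FOV:

function viewableExtentDeg(headRotationDeg: number, fovDeg: number): number {
  // Content this many degrees from the straight-ahead line still falls
  // inside the FOV once the head has rotated by headRotationDeg.
  return headRotationDeg + fovDeg / 2;
}

const FOV = 110; // horizontal FOV of the Oculus Rift DK1

const comfortableLeftRight = viewableExtentDeg(30, FOV); // 85 degrees either side
const maximumLeftRight = viewableExtentDeg(55, FOV);     // 110 degrees either side

console.log({ comfortableLeftRight, maximumLeftRight });

The same sum works vertically, using Chu’s up and down figures together with the headset’s vertical FOV.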

[Image: field-of-view comfort zones]

With the available dimensions laid out, we can start to allocate suitable areas for content and rule out dead zones. Dead zones are the areas outside of these dimensions, where integral content should not be placed. However, that doesn’t make them completely redundant. Because it’s a novel, immersive experience, people are curious when they put on a VR headset. In user testing, they turned around and looked at things, exploring the new reality. So it’s beneficial to have something fun – an Easter egg, for example – that rewards their curiosity and makes the best use of VR’s 360 capabilities.

It’ll come as no surprise that the zones deemed comfortable should house the main content. Areas that are accessible but strenuous (that no man’s land between dead zones and suitable areas) can contain content, but the user should only have to interact with them briefly.
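
As a small illustrative sketch (the thresholds follow from the horizontal extents calculated earlier; the zone names are invented for this example), a content position can be classified by its angle from the user’s straight-ahead line:

type Zone = 'comfortable' | 'strenuous' | 'dead';

function horizontalZone(angleFromCentreDeg: number): Zone {
  const angle = Math.abs(angleFromCentreDeg);
  if (angle <= 85) return 'comfortable'; // main content lives here
  if (angle <= 110) return 'strenuous';  // brief interactions only
  return 'dead';                         // Easter eggs, nothing integral
}

console.log(horizontalZone(40));  // 'comfortable'
console.log(horizontalZone(100)); // 'strenuous'
console.log(horizontalZone(150)); // 'dead'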

Plane thinking

Now we’ve narrowed our 360 degrees into comfortable zones for digesting content, we need to start thinking about depth. We’ve added another dimension, placing a user into a whole new reality, so how do we design for three dimensions?

Applying basic film, painting and photography principles, we settled on three planes within our interface: a foreground, midground and background. Each of those planes can have multiple canvases sitting at different depths within a plane to enhance the 3D feeling. Alex Chu’s research into depth perception found the best distance to place content to ensure you get a strong 3D effect is between 1 and 20 metres. Anything closer and the user becomes cross-eyed; anything further away becomes flat.

We tested a few depths of planes in our prototype, settling on the foreground at 3 metres away from the user, the midground at 6 metres (with slight variations of depth between different canvases), and the background at 20 metres, but this could be set further back depending on what type of background was selected.
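
Translating those depths into world-space sizes only needs basic trigonometry: a canvas sitting d metres away that should span a given angle of the view needs a width of 2 × d × tan(angle ÷ 2). The 60-degree span in this sketch is an illustrative assumption rather than a figure from our tests:

function planeWidthMetres(distanceMetres: number, horizontalSpanDeg: number): number {
  const halfAngleRad = (horizontalSpanDeg / 2) * (Math.PI / 180);
  return 2 * distanceMetres * Math.tan(halfAngleRad);
}

// How wide a canvas needs to be to span roughly 60 degrees of the view
// at each of the plane depths we settled on:
console.log(planeWidthMetres(3, 60));  // foreground: ~3.5 m
console.log(planeWidthMetres(6, 60));  // midground: ~6.9 m
console.log(planeWidthMetres(20, 60)); // background: ~23.1 m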

[Image: plane depths]

Foreground

We set the foreground plane as 10:3 (1200px by 360px). It’s the closest plane to the user, sitting 3 metres away, so the other planes must remain visible behind it; that’s why we anchored it to the bottom of the FOV. That omnipresence in the user’s initial FOV made it the ideal area for navigational elements.

The plane itself can be divided into three canvases – a Primary (central) canvas and two Secondary ones – with the navigational elements spread across them. (Read more about navigational principles of VR in the next chapter.) These navigational elements spread across the three canvases should correspond with the midground content zones, as I’ll discuss in a moment.

The peripheral Secondary canvases can be tilted (50 degrees) around the user to maintain a consistent distance from the eye and create a sense of immersion (as seen below), rather than the feeling of looking at a flat surface.
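
As a rough sketch of that arrangement (the coordinate conventions and names are illustrative, not taken from our prototype), the three foreground canvases can be placed on a 3-metre arc, with the Secondary ones offset 50 degrees to either side and turned to face the user:

interface CanvasPlacement {
  name: string;
  position: { x: number; y: number; z: number };
  yawDeg: number; // rotation of the canvas about the vertical axis
}

// The user sits at the origin, looking down -z, with y pointing up.
function placeOnArc(name: string, distanceMetres: number, offsetDeg: number): CanvasPlacement {
  const offsetRad = offsetDeg * (Math.PI / 180);
  return {
    name,
    position: {
      x: Math.sin(offsetRad) * distanceMetres,
      y: 0,
      z: -Math.cos(offsetRad) * distanceMetres,
    },
    // Turn the canvas so its face points back towards the user.
    yawDeg: -offsetDeg,
  };
}

const foregroundCanvases = [
  placeOnArc('secondary-left', 3, -50),
  placeOnArc('primary', 3, 0),
  placeOnArc('secondary-right', 3, 50),
];

console.log(foregroundCanvases);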

[Image: foreground canvases]

Midground

After playing with a few plane sizes, we set the midground at 2:1 (1800px by 900px). In the interests of a simple interface, we split the plane into three canvases sitting 6 metres from the user, though more canvases could be added to give depth and complexity to the design.

As with the foreground navigation hub, the three canvases are made up of a Primary one flanked by two Secondary canvases. The Primary canvas dominates the user’s initial FOV, so is used to showcase main content. As the user will have to turn their head to view them fully, the Secondary canvases house the secondary content (corresponding with the navigational elements found in the matching foreground canvas). These peripheral canvases can once again be tilted forwards, so content is easily digestible and the experience feels wrapped around the user.
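
One way to express that correspondence between foreground navigation and midground content (the section names below are invented purely for illustration):

type CanvasSlot = 'secondary-left' | 'primary' | 'secondary-right';

interface ContentZone {
  slot: CanvasSlot;
  navigationLabel: string;  // lives on the foreground plane, 3 metres away
  midgroundContent: string; // shown on the matching midground canvas, 6 metres away
}

const zones: ContentZone[] = [
  { slot: 'secondary-left', navigationLabel: 'About', midgroundContent: 'about-panel' },
  { slot: 'primary', navigationLabel: 'Work', midgroundContent: 'featured-work' },
  { slot: 'secondary-right', navigationLabel: 'Contact', midgroundContent: 'contact-panel' },
];

// Selecting a navigation element swaps in the corresponding midground content.
function contentFor(slot: CanvasSlot): string | undefined {
  return zones.find((zone) => zone.slot === slot)?.midgroundContent;
}

console.log(contentFor('primary')); // 'featured-work'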

[Image: midground canvases]

Background

You have a few options when it comes to the background, but the obvious qualities of VR are put to best use by wrapping the user in a background sphere, whether that’s a 360 video or image (with the same aspect ratio as the midground – 2:1; we used 3600px by 1800px). Read more about how to build that sphere and implement a 360 video in the following articles. We had our background sphere sitting 15 metres from the user, but it could be anywhere up to 20 metres away (particularly if you use multiple canvas depths in your midground).
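
Purely as an illustration (a sketch, not the approach used in our prototype), here is one way to create such a sphere with three.js, a common WebGL library; the texture path is a placeholder:

import * as THREE from 'three';

const scene = new THREE.Scene();

// A 15-metre sphere, flipped inside out so the texture renders on its inner faces.
const geometry = new THREE.SphereGeometry(15, 60, 40);
geometry.scale(-1, 1, 1);

// A 2:1 equirectangular image (we used 3600px by 1800px).
const texture = new THREE.TextureLoader().load('/assets/background-360.jpg');
const material = new THREE.MeshBasicMaterial({ map: texture });

const backgroundSphere = new THREE.Mesh(geometry, material);
scene.add(backgroundSphere);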

Wherever you set your background and whether it’s spherical or flat-pack, its design has a sizeable impact on the VR interface. In fact, it ultimately dictates the overall user experience. Using a spherical background or skybox of an endless landscape will blow away a user with the vastness of the experience. That’s great for how immersive it feels, and the sense of ‘presence’ (meaning a user feels connected to a reality that’s outside of their own physical body via technology). But at the same time, such vast immersiveness can be detrimental to engagement with the rest of the content, navigation and design. In our user tests, we found some people spent more time looking around the landscape than interacting with the design. Beauty can become a distraction.

On the other hand, placing someone in a confined space can make the experience feel underwhelming. So you need to find a balance and weigh up the pros and cons of both. The needs of your user and the role of the experience you’re building can help to answer these questions. For example, are you helping people to complete a functional form or digest complex content? Then remove distractions wherever possible.

[Image: background sphere]

Conclusion

As this platform is still in its infancy, insights and learnings will constantly evolve. Such is the nature of new technologies: the only way we can learn from them is through iteration and development.

We’ll release our findings in the form of further articles, but to aid a new breed of VR designers in the here and now, we’ve created a Sketch template from our current findings. To go along with this, we’re in the process of making a Unity VR toolkit, which will allow designers and UX practitioners to quickly create click-through prototypes by themselves. Stay tuned for the release.