In the first part of a new series, Immersive Experience Specialist Jed Ashforth argues that expectations are the key to understanding how to design for users in VR experiences.
The most important lesson any Designer can learn when approaching a VR project is that achieving a state of deep immersion and presence always comes as a result of meeting, and then exceeding, your users’ expectations. In fact, expectations are an intrinsic part of the magic trick that VR conjures, and they seep into every aspect of the design and development of an experience, even if their importance isn’t obvious at first glance.
When tutoring new VR teams, we have always led with a simple maxim: Always Give The User What They Expect. For Enterprise developers this makes sense straight away; Enterprise applications for immersive technologies are always focused on streamlining the workflow and adding value to established pipelines and processes. Developers working on Enterprise apps actively seek to understand their clients’ existing workflows and look for where they can offer more elegant and useful solutions to replace part or all of an existing practice. As a design approach, they will actively avoid impacts on productivity where possible, so they design software to be as intuitive and familiar as possible to their users. Intuition and familiarity come from past experience and established practices their users already know. In other words, they’re designing their apps to meet their users’ expectations as successfully as possible.
For creative developers working in immersive technology, though, this often comes as something of a surprise. In their world, they seek to give their audiences the unexpected as much as possible. Narrative twists and turns are gold-dust to storytellers. Innovation is the lifeblood of videogames. Why does VR mean we should suddenly give users what they expect?
Unlike film, literature, comics, theatre, TV or videogames, our new immersive technologies haven’t yet established their identities or identified which structures work best for their audiences. When we view a show or a movie, we do so from atop a solid and sophisticated framework of how those mediums work; one that we have built over a lifetime of interacting with TV and movies – even if we don’t realise it.
There are a million opinions on what the best storytelling structure might be, but storytelling as an art form has matured over many centuries and we now understand a great deal about how to create characters and narratives that resonate with audiences. The various recipes of screenwriting don’t offer a fool-proof formula for success, but screenwriters have gathered enough wisdom from those recipes that they can at least predict what proportions of ingredients, served in what order, might taste good to audiences. The artistry of screenwriting isn’t often evidenced in unusual structures; it is seen far more commonly in the ways a work can twist and fuse existing ingredients in ways audiences don’t expect or haven’t experienced before.
Every story we experience is placed in a context formed from our past experiences with stories, and even if we don’t always consciously recognise the rules of the medium, we’re all experts in consuming stories through movies and TV, and we can all sense it when a character arc isn’t satisfying or when a story feels rushed or unfinished. We’ve seen other movies and TV shows that were more satisfying to us, and that gives us context for what we should expect.
Interactive VR is still a very young medium, and we’re collectively only just figuring out what structures will work for our audiences, especially in creative work. We’re a medium without a context to call our own just yet. As such, when immersed in VR, new users look for a frame of reference for what they’re experiencing, and they almost always frame the experience in the context of real life. Our hands and head move through virtual space exactly as we expect them to. The world (usually!) obeys the same rules we’d expect in real life – a flat horizon, 9.8 m/s² of gravity pointing down. Disobeying such fundamentals can be done in VR of course, and one day, when our expectations are ‘this is VR and bending reality is nothing unusual’, this will sit comfortably with us. But right now it is unusual, and if you want to deliver a widely comfortable and understandable VR experience that doesn’t alienate users, it’s generally wise not to upset reality too much.
As more and more aspects of the virtual world appear to adhere to real-life properties, users will naturally expect the other things they encounter to behave like real life and extend those expectations across the simulation. In essence, in trying to present itself as real, the virtual experience sets our expectations that it will behave with real-world properties. This is the reason we might drop our controllers on a virtual desk that doesn’t exist or walk around a virtual table that we could just walk through. In the absence of evidence to the contrary, our brains willfully comply with the illusion; efficiently processing our reality moment-to-moment means assuming what we experience is valid until we are challenged by a set of inputs that seem to contradict our expectations. Interacting with the real world has always had a set of firm rules that we don’t question – we always walk around tables, we can always confidently place an object on another object. Whenever we’re immersed in VR and forget ourselves, it’s easy to find ourselves being led by our real-world expectations, because we haven’t yet built equivalent expectations and context for our brains to frame this moment in a more appropriate way.
These expectations are a blessing and a curse that touch every aspect of VR design. Why do some users feel discomfort and cyber-sickness when they’re driving a virtual car, for example? There are a lot of technical reasons, but ultimately they are all driven by expectations. What that experience looks, sounds and feels like in VR is probably not a 100% match with your expectations of driving or riding in a real-world car. Visually it might look like you’re driving a car, and sonically it might sound like being in one. But as convincing as those elements might be, all the other sensory inputs that would come packaged along with them are missing. We don’t feel acceleration or deceleration. We don’t feel the movement of the car and the surface of the road through the seat and the steering assembly. We don’t feel the lateral g-forces as we turn a corner at speed. Anyone who has driven or ridden in a car is familiar with the package of sensory inputs that collectively provide the experience of driving. I like to think of this as an ‘expectation model’, and as a designer it’s vital to consider that every aspect of your virtual experience carries such a package of expectations about how that object, event or person is going to act and react for the user.
In the early days of hands-in-VR, we saw the first lessons about how users should interact with objects start to emerge. The obvious thought was that users should be able to pick up and manipulate virtual objects just like they do with objects in the real world, able to turn and examine them from every angle. Early VR work with objects often felt clumsy and unintuitive, with users spending an unnatural amount of time getting an object held properly in their virtual hands in a way that, in real life, they would manage naturally and without thinking. Anything with rudimentary finger tracking would mangle fingers against objects as users tried to respond and adjust to visual feedback that didn’t match their intentions or what their real hand was doing. A user reaching out to grab something wouldn’t get the result they’d come to expect from a lifetime of picking up and manipulating real-world objects.
So developers quickly adapted and learned to ‘snap’ objects into the user’s hands when grabbed, so that they would adopt a natural, useful orientation for the player to make use of the object. Grab a gun in a VR game, and these days it will always appear in your hand ready for use, pointing in the right direction and held in a way that fits with your expectations. Play a little with such an object and you’ll find the limitations and compromises that have been made so that your instant expectations of how it looks and feels to grab and hold a gun are satisfied. With a real gun you could grab it by the barrel or the bottom of the stock, but most virtual guns will only let you hold them one way. While designers lost the flexibility the earlier systems promised in terms of letting you hold an object at any angle, they gained a much more immediate interface that felt more natural because it more closely, and more immediately, satisfied the user’s expectations.
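The snap-to-grip idea can be sketched in a few lines of Python. Everything here – the `Pose` class, the `GRIP_LOCAL_OFFSET` values and the `snap_to_hand` function – is an illustrative assumption rather than any engine’s real API, and the rotation handling is deliberately simplified: the point is only that the object ignores where the fingers actually landed and adopts a single authored grip pose relative to the hand.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple  # (x, y, z) in world space, metres
    rotation: tuple  # quaternion (w, x, y, z)

# Authored grip pose, defined relative to the hand: here the (hypothetical)
# gun's handle sits in the palm with the barrel pointing forward.
GRIP_LOCAL_OFFSET = Pose(position=(0.0, -0.02, 0.05),
                         rotation=(1.0, 0.0, 0.0, 0.0))

def snap_to_hand(hand_pose: Pose, grip_offset: Pose = GRIP_LOCAL_OFFSET) -> Pose:
    """Place the grabbed object at a single authored grip pose relative to
    the hand, regardless of how the user's fingers actually closed on it.
    Simplified: the offset is applied without rotating it into hand space,
    assuming an axis-aligned hand for clarity."""
    px, py, pz = hand_pose.position
    ox, oy, oz = grip_offset.position
    return Pose(position=(px + ox, py + oy, pz + oz),
                rotation=grip_offset.rotation)

# When a grab event fires, the object adopts the authored pose instantly:
hand = Pose(position=(0.3, 1.2, 0.5), rotation=(1.0, 0.0, 0.0, 0.0))
gun_pose = snap_to_hand(hand)
```

A real implementation would compose the hand’s rotation with the grip offset and might blend the object toward the grip pose over a few frames, but the design trade-off is the same one described above: one authored grip instead of free-form manipulation.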
And this applies to any interaction in VR – whether it’s meeting a character, sitting at a table, riding a vehicle, pushing a button or locomoting through the world – all of these activities carry a package of expectations for the user, and the VR designer’s responsibility is to deliver on as many of the most important elements of that package as possible. If the designer fails to capture the essence of those expectations, they can expect it to impact the quality and level of immersion. The severity of that impact might be very minor in some cases (“I’m not seeing myself reflected in that shiny surface”), but it can also have a major effect in other places (“smooth artificial locomotion in VR makes me feel nauseous”), which can, at worst, be show-stoppers for your users and make them want to exit your experience.
Almost all of the comfort issues we can experience in VR are rooted in a failure to satisfy our pre-existing expectation models. Every aspect of our immersion and sense of presence in the virtual world is linked inextricably to how successfully its various aspects address and satisfy our expectations.
We’ll look deeper into some of these effects in Part 2 of this series. Stay tuned!