In the second part of this series, Immersive Experience Specialist Jed Ashforth delves deeper into the idea that expectations are the most vital key to understanding how to design for users in VR experiences.
In one early experiment at Sony PlayStation, we tested two groups of users on a virtual driving game: a very early experiment using a version of DriveClub converted into VR. Years later, DriveClub would end up being one of the most challenging launch titles for PlayStation VR, and this test gave us our first sign of the challenges ahead.
We were testing how much more immersive the game felt in VR when played with a top-end force feedback steering wheel, compared to the way most players would encounter it: using the standard DualShock 4 controller included with the PS4. What we found went against our early expectations and offered some fascinating insight. Rather than the experience being more comfortable and immersive, our sample group of Sony staff playing with the steering wheel reported a sharp increase in discomfort. Many users couldn’t make it round a full lap with the wheel, but could manage three or more laps with the controller with no problem. This conundrum sparked a round of investigation and discovery that proved hugely useful in understanding user expectations and the effects they can have on the plausibility of a virtual experience.
Immersion and Comfort are the primary goals
In the first part of this series (link), we discussed how Expectations are one of the major components of any VR experience, and introduced the idea of Expectation Models in VR. These are the packages of sensory inputs and responses that we have learned to associate with activities and interactions in the real world. I believe they are a crucial consideration, because they can directly impact not only your degree of immersion, but also your level of comfort within a virtual experience. Because immersion and comfort are simultaneous primary goals for VR makers, understanding and predicting your users’ expectations of any given activity in VR is massively important. Understanding their expectations is the key to understanding how they will react.
Expectation Models are built and refined as a result of any and all exposure to a real-life object, interaction, or event. The more familiar we are with a real-life experience, the stronger and more robust our expectation model becomes, until we’re hyper-sensitive to anything being amiss. Navigating and locomoting our bodies through the real world is something that almost all of us are intimately familiar with, because we’ve done it every day of our lives. That means any solution for artificial locomotion in VR has a very tough time appearing authentic to users, because not all of your expected visual and physical responses are happening, and your body and mind quickly tune into there being something amiss.
Remember that many of your bodily systems rely on feedback mechanisms. The way we walk, for example, results in a constant loop of feedback from our vestibular systems in tandem with our visual feed, which allows us to correct our balance on-the-fly as we’re walking along. If we’re leaning, we can correct. If we stumble, we can catch ourselves. Without these input systems being in sync, we’ll quickly lose our normally reliable balance system. If you’ve ever tried walking blindfolded (no visual input) or drunk (vestibular confusion – and it’s probably not a good idea to test those things together!) then you’ll understand how much we rely on the combined information from both systems working in tandem in order to perform simple tasks like walking in a straight line or bending over to pick up a kid’s toy from the floor.
Considered in this context, it’s not hard to see why emulating human locomotion in VR is so damn tricky to pull off. Our expectations of what that should look and feel like are very well defined by a lifetime’s worth of prior experiences. We can consider ourselves world-class experts in the experience of moving our own bodies around, and it’s hard to fool any expert when only the sounds and visuals are convincing, but everything else feels distinctly off.
Screwing the Locomotion
And it’s not just locomotion – EVERY action you do in VR has some basis in a real life activity. Everything can be considered to have an expectation package attached to it, a manifest of sensory inputs that your brain needs to tick off to confirm you are seeing what you think you are seeing, and doing what you think you are doing. It becomes a problem when not everything is working as expected.
This is why VR locomotion can’t be regarded as a problem that can be ‘solved’ by development teams. There are measures that can make it more comfortable, for sure, such as the various comfort options you’ll see in many modern VR experiences. But the underlying cause is simply a by-product of the way our bodies work; our physiology is designed this way. The system is working fine – it’s just that the quality and fidelity of the fake information we’re feeding in isn’t complete and robust enough to fool our systems completely. It’s natural, and it’s inevitable: anything we’re really familiar with in real life is going to be hard to fake convincingly in VR.
A general guideline is that if you’re familiar enough to do a thing on ‘autopilot’ in real life, it’s going to be hard for VR to pull it off convincingly for you.
As social humans, we’re all experts at looking at and interacting with other people. This is another area where crossing the uncanny valley in VR is going to be challenging, at least in the short term until we can make believable virtual people.
And this is the same challenge that any familiar activity is going to face in its journey to VR.
Driving is another example – it’s a familiar activity that your body and mind train themselves to do almost without any thought. Experienced drivers often enter an autopilot, or state of ‘flow’, during the drive. You realise it when you get to the end of the journey safely but can hardly remember any of the details. Our brains are pretty cool at letting us get on with familiar behaviors without having to dedicate too much conscious thought to them. As with a self-driving car or a jumbo jet, you really do want your brain to alert you straight away when you need to take over from the autopilot. In those scenarios it’s clearly a good thing to have your spider-senses tingle so you can focus all your conscious attention on reacting to changes from the norm, like the car in front suddenly hard-braking. It’s a good thing we’ve evolved to have these skills and functions – but it means VR is constantly in danger of your brain drawing attention to things that aren’t quite right.
Making a driving experience in VR can be very hard to pull off without causing a mismatch between what the user is virtually experiencing and what they’re expecting from real-life experience, because those expectations are so rich, detailed, and robust. Humans can be sensitive enough to notice when their seat position or mirrors are ever-so-slightly not-quite-right. It’s easy to understand, then, why we can also be highly sensitive to a virtual drive where whole categories of sensory information are absent or aberrant.
And yet there are lots of VR driving games out there that players love and that don’t cause those users untenable amounts of discomfort – so how can that be?
I promise we’ll come back to that, but before we answer that question, let’s look at another example.
If you’re an experienced tennis player in real life, it might be harder for you to become immersed in a VR tennis game, while the next guy finds it super-immersive and highly believable. Maybe that next guy played some tennis as a kid and has watched a few Wimbledon finals on TV. Maybe he played a tennis videogame or two over the years. But as a really experienced player, your greater familiarity with the real-life version is going to carry a more detailed slate of expectations for what tennis looks and feels like. The other guy’s model is probably much easier to convincingly satisfy.
The Plausibility Illusion
And this is a key consideration for anyone working in VR – the most immersive moments in virtual reality are those where you’re not stopping to question the validity and realism of what you’re doing. In VR science this is called the Plausibility Illusion, and it’s easier to achieve when the action or activity is less familiar to the user.
So when you’re driving a virtual car or playing virtual tennis or doing just about anything else in VR, picture that your subconscious mind is evaluating all the sensory inputs it’s getting. It’s weighing them and measuring them against the relevant expectation models it has of those experiences, to test their plausibility and validity. Those expectations can come from a whole bunch of places, not just personal experience. If you’re playing a virtual tennis match and your memories of real-life tennis are distant, and colored by representation in other media, it may not take much for it to be plausible.
As long as the racket feels and behaves like the racket you remember holding as a kid, as long as it sounds like it does on TV when you thwack it, as long as the ball behaves and bounces like you’d expect, then that might be enough to look and feel plausible enough to fool your brain. Yep, that’s the same immersion effect that worked for Wii Sports. Remember, it’s exponentially richer and more plausible with the first-person VR viewpoint and accurate 1:1 controller tracking.
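One way to picture this subconscious weighing process is as a scoring of sensory channels against an expectation model. The little Python sketch below is purely illustrative: the channel names, weights, and match values are invented for the example (the brain’s real process is vastly richer and certainly not a linear sum), but it shows why the same VR tennis game can clear a casual player’s plausibility bar while falling short of an expert’s.

```python
# Toy illustration only: treat an "expectation model" as per-channel weights,
# and plausibility as how well the delivered sensory channels match them.
# All numbers here are invented for the sketch, not measured data.

def plausibility(expectation_weights, sensed_match):
    """Weighted average (0..1) of how well each sensory channel
    matches what the expectation model demands."""
    total = sum(expectation_weights.values())
    score = sum(w * sensed_match.get(channel, 0.0)
                for channel, w in expectation_weights.items())
    return score / total

# A casual player's model of tennis leans on sight and sound;
# an expert's model weights physical feel far more heavily.
casual = {"visual": 0.5, "audio": 0.3, "haptic": 0.2}
expert = {"visual": 0.3, "audio": 0.2, "haptic": 0.5}

# What a VR tennis game can deliver today: convincing sights and
# sounds, but only crude haptic feedback from a rumbling controller.
vr_delivers = {"visual": 0.9, "audio": 0.9, "haptic": 0.3}

print(round(plausibility(casual, vr_delivers), 2))  # → 0.78
print(round(plausibility(expert, vr_delivers), 2))  # → 0.6
```

The expert’s heavier weighting of the missing haptic channel drags the score down, which mirrors the tennis example above: the same sensory package satisfies one expectation model and fails the other.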
So the less familiar an activity is, the easier it is to plausibly reproduce in VR!
That’s one reason why some of those driving game enthusiasts who love VR driving are able to find it so immersive when others might find it causes them discomfort. If you play a lot of driving games, you will have developed a rich set of expectations over time – not only about driving a car, perhaps, but also about driving a videogame car. You can frame the VR experience within a broader, more forgiving set of parameters than someone who is only familiar with real-world driving can.
And another reason is that the context you place that experience in can be heightened and influenced by how you interface with it. Remember our early DriveClub experiment and the majority of users who found playing with the PlayStation controller more immersive and comfortable than the much-more-true-to-life steering wheel? What we came to realise was that the interface you have with the experience is hugely informative when your brain is trying to contextualise it and match it against other activities you’ve experienced. Having a game controller in your hand while in VR is a strong signifier that you’re playing a videogame, and that what you’re seeing and hearing is not real.
Framing the experience as a videogame invokes a comparison with a different set of expectations. Videogames being as broad and multifarious as they are, this frame of comparison is far more flexible and accommodating.
This truth was borne out in the responses players gave in the DriveClub interviews. We saw a trend of users describing the controller-based experience as ‘the coolest driving game I’ve ever played’. But the word ‘game’ was conspicuously absent when talking about the exact same race, but played instead with a steering wheel and pedals. In fact, the responses were all variations on how the experience was ‘not like driving a real car’. Simply changing the input device changed the whole context of the experience for users, and that directly affected their comfort levels.
What we went on to find was that there were other ways of abstracting away from realism that also seemed to help users re-frame the experience to positive effect. There are a slew of techniques and tricks that can be used to do this, more often than not without damaging the plausibility of the illusion.
The designer’s goal in VR is most often to achieve sustained immersion for the user, and abstraction techniques can be a good way to ensure that immersion and to stop unsatisfied expectations from crashing into the reality of your experience. In the next article in this series, we’ll take a look at ways we can do just that.