Meta Quest: Meta demonstrates full-body motion simulation with a Quest. A step towards more realistic avatars?

Animating our avatars is not easy, and very often, like those designed by Meta, they are left without legs to avoid animations so far from reality that they break the immersion.

If you have used virtual reality for a while, you have surely noticed that multiplayer experiences involving virtual avatars are, most of the time, quite funny to watch. The gait of our pixelated colleagues is hilarious and very far from representing the real movements of our legs, when it is not simply a very rough animation.

Like the avatars of Meta, which drew strong criticism following the announcement of Horizon Worlds, the giant's famous metaverse, we are used to mingling with virtual busts: a sensible choice to avoid breaking immersion with clearly unrealistic animated legs. Nevertheless, players are loudly demanding the arrival of legs in virtual reality so that we are represented in full. Solutions do exist to reinforce immersion with the help of trackers placed on our shoes or legs, such as the VIVE Trackers, for a much more believable result: our lower limbs are then very well modeled. However, these solutions are not supported by every experience, and they represent an extra cost for consumers on top of a sometimes complicated setup.

Meta understood this well and is working flat out to find viable solutions that simulate our body as a whole as faithfully as possible. Andrew Bosworth, Meta's chief technology officer, confirmed as much, and today we get a glimpse of what he had in mind. The video above presents what Meta Reality Labs, Meta's research department, is experimenting with: simulating the movements of the human body in real time in order to enable immersive interactive experiences in AR/VR, without any external tracking.

In it, we see a man wearing a Meta Quest 2 moving around a room with no trackers other than his headset and controllers at first, then only the headset. The man's modeled avatar is animated from spatial data captured by the Quest 2's cameras and controllers, combined with plausible body movements. After training the model and adding physical constraints, the magic happens, and it is quite impressive. Everything works, we repeat, without any observation of the lower body.
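To give a rough idea of the problem being solved (sparse tracking signals in, full-body pose out), here is a deliberately simplified toy sketch. This is not Meta's actual method: where their system uses a model trained on motion data plus physical constraints, this sketch just applies fixed, assumed body-proportion ratios to the head position to place plausible lower-body joints.

```python
# Toy sketch: guess a plausible lower-body pose from head position alone.
# A real system (like the one shown in Meta's video) would feed headset
# and controller poses into a trained model with physical constraints;
# here we only apply fixed anthropometric ratios (all values assumed).

HIP_RATIO = 0.53   # hip height as a fraction of head height (assumption)
KNEE_RATIO = 0.28  # knee height as a fraction of head height (assumption)

def estimate_lower_body(head_pos):
    """Return plausible hip/knee/foot positions from a head position (x, y, z)."""
    x, head_height, z = head_pos
    return {
        "hip":  (x, head_height * HIP_RATIO, z),
        "knee": (x, head_height * KNEE_RATIO, z),
        "foot": (x, 0.0, z),  # assume the foot rests on the floor
    }

# Example: a user whose headset sits 1.70 m above the floor.
pose = estimate_lower_body((0.0, 1.70, 0.0))
```

Even this crude heuristic hints at why leg animation is hard: with only head and hand data, everything below the waist must be inferred, which is exactly where a learned model earns its keep.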

The full description of the concept can be found here. And you, what do you think? Is it convincing enough to be deployed on our avatars?
