Researchers at the University of Southern California and Facebook track your facial expressions and transfer them into virtual reality with an Oculus Rift hack.
Researchers at Facebook’s Oculus division and the University of Southern California have found a way to record the facial expressions of someone wearing a virtual-reality headset and transfer them to a virtual character. That could make socializing, working, or playing in virtual worlds much more rewarding, because the expressions of a virtual body double or an otherworldly avatar could precisely mirror those of the real person’s face.
A 3-D camera attached to the headset on a short boom records the movement of the wearer’s mouth. Strain gauges added to the foam padding that fits the headset to the face measure the motions of the upper part of the face. Once the two data sources are merged, an accurate 3-D representation of the user’s facial movements can be used to bring a virtual character to life, whether it is a likeness of the user or something other than human.
Hao Li, an assistant professor at the University of Southern California who led the project, says the technique could make inhabiting and interacting in virtual worlds far more compelling. “To get a virtual social environment, you want to convey this behavior to other people,” he says. “This is the first facial tracking that has been demonstrated through a head-mounted display.” In 2013, MIT Technology Review named Li one of its Young Innovators.
Facebook CEO Mark Zuckerberg has said he is interested in the Oculus Rift headset providing new ways to socialize with others, although he has not shared details. The social network acquired Oculus last year.
Li says the Oculus researchers worked with him purely as a research exercise; however, it would not be too difficult to refine the system they came up with into a commercial product. “If people think this is really central to important killer applications, you could get it into production relatively quickly,” he says.
“This is really cool,” says Philip Rosedale, who founded Second Life and is now CEO of a virtual-worlds startup called High Fidelity. The company is working to make realistic virtual social interaction possible, using webcams and other sensor technology to capture arm and hand gestures and facial expressions.
Rosedale says there is good evidence that having people’s avatars exhibit their real-world body language helps bring people together in a virtual world. He agrees with Li that it should be possible to streamline the initial proof of concept. For example, the boom-mounted camera could be built into the bottom of the headset. Efforts by researchers and startups to put eye-tracking cameras on the inside of virtual-reality headsets might offer another possible way to gather data on the upper part of the face.
The linchpin of Li’s system is software that can merge data from the sensors recording the upper and lower parts of the face and map the result onto a 3-D model of a face.
For now, the software requires you to go through a thorough calibration process the first time you use the system. First, you wear a headset with the display portion removed in front of a 3-D camera, so it gets a full view of your face, and give your facial muscles a 10-second workout by twisting them into a range of expressions. Then you repeat the 10-second workout wearing the complete headset. The combined data teaches the software how to correctly match the streams of data from your upper and lower face.
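As a rough illustration of how such a calibration might work, the sketch below fits a linear map from combined sensor readings (strain gauges plus camera features) to expression "blendshape" weights that could drive a 3-D face rig. All names, dimensions, and the choice of a linear least-squares model are assumptions for illustration; the article does not describe the actual algorithm.

```python
import numpy as np

# Hypothetical dimensions, not from the actual system:
N_SENSORS = 8       # strain-gauge channels in the foam padding (upper face)
N_CAMERA = 12       # depth-camera mouth features (lower face)
N_BLENDSHAPES = 20  # expression weights on the avatar's face rig

def calibrate(sensor_frames, known_weights):
    """Least-squares fit of a linear map from fused sensor data to
    blendshape weights, learned while the user performs a sequence of
    known expressions (the 10-second calibration workout)."""
    # Append a bias column so the map can absorb each sensor's resting offset.
    X = np.hstack([sensor_frames, np.ones((sensor_frames.shape[0], 1))])
    mapping, *_ = np.linalg.lstsq(X, known_weights, rcond=None)
    return mapping

def track(mapping, frame):
    """Fuse one frame of upper-face (strain) and lower-face (camera)
    readings into blendshape weights for the avatar."""
    x = np.append(frame, 1.0)               # add the bias term
    return np.clip(x @ mapping, 0.0, 1.0)   # keep weights in [0, 1]

# Simulated calibration: frames of combined sensor data paired with the
# blendshape weights of the expressions the user was asked to make.
rng = np.random.default_rng(0)
frames = rng.normal(size=(300, N_SENSORS + N_CAMERA))
weights = rng.uniform(size=(300, N_BLENDSHAPES))
M = calibrate(frames, weights)
live = track(M, frames[0])  # one weight per blendshape
```

In practice a system like this would likely use a nonlinear model and per-user refinement, but the same structure applies: a calibration step learns the mapping, and a tracking step applies it to each incoming frame.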
To eliminate that step, Li says he is feeding his software data on more faces. He is also working on other techniques to make it simpler to copy your real self into a virtual world. Many tools have already been developed for creating 3-D replicas of people’s bodies and faces using conventional and 3-D cameras. Li recently developed software that handles the more difficult task of making a realistic 3-D re-creation of a person’s hairstyle from a single photo. Both that project and the face-tracking research will be presented at the SIGGRAPH computer graphics conference in Los Angeles this August.
Source: MIT Technology Review