Inspiration

We were inspired by Meta's Movement SDK! For years the team at Agile Lens has toiled under the weight of a monstrous mocap tech stack of pucks, optical devices, inertial suits, head-mounted cameras, and head-mounted phones, always dreaming of something simpler. Now, armed with only a Quest Pro, a performer can step into the role of just about any avatar and experience live performance capture of the eyes, face, hands, and even body. We wanted to see where that launchpad would take us.

Oh, and ditto for Shared Spatial Anchors and the possibilities of simple mixed reality multiplayer co-presence.

What it does

(Want to listen to this post instead of reading it? Check this out.)

Like all good live shows, you start in a lobby, except this one modifies your actual surroundings to fit our particular 1920s Prohibition / Steampunk aesthetic. From there you get to sample the characters available to perform as and "host" a new show. Once your show is ready, the lobby fades away to reveal more of your surroundings (now modified to look more like a theatre), and anyone in the same physical location as you can join your show from that same original lobby (it should even look the same if you've all scanned the same space!). As an audience member, you get to see the actual performer side by side with their digital counterparts-- seeing what they're actually doing compared to what's happening in mixed reality only adds to the magic. And what is the performer doing? It could be as simple as singing a song or acting out a play alongside an avatar matching their face and body, or as wild as setting up a chorus of desks to sing while they perform interpretive dance, periodically freezing poses that capture a moment in digital perpetuity.

How we built it

We gathered the finest minds at Agile Lens: Immersive Design, started with kit-bashing existing SDKs, got creative with them, supported each other, triaged tasks, rapidly iterated, worked real hard, and never gave up!

Challenges we ran into

  1. UNREAL ENGINE vs UNITY3D - There are a lot of Meta features on the Unity side that we'd love to utilize, but they just don't exist for Unreal Engine yet: the Meta Avatars SDK, Interaction SDK, and Quick Action Menu, to name a few. There's also just much better documentation for Unity, as well as more sample projects :(
  2. CO-LOCATION SAMPLE - It's missing crucial information. We figured a lot out on our own that should have been in the documentation: for example, that you need to have an app on an App Lab channel for it to work, that different accounts need to be used for the same session, that there's a 'data usage checkup,' and that you need to include the Meta Platform SDK plugin. And even if you do all those things, the existing logic in the sample doesn't work without significant adjustment.
  3. NO METAHUMANS DOCUMENTATION FOR MOVEMENT SDK - We're in Unreal, so of course we want to use MetaHumans. There isn't a MetaHuman sample in the Movement SDK, just a retargeter, and there's no documentation on how to set it up. We got it working, but it would have been faster if there were instructions.
  4. NETWORKING - Replicating Live Link (the face and body data) is a nightmare and a half (see the quantization sketch after this list). Also, for general multiplayer we couldn't find each other on the Meta office network or on our phone hotspots. We eventually figured out a way to host a network from a laptop, and that worked, though it has a maximum of 8 connections.
  5. OPENXR SCENE API - Using the Mixed Reality Utility Kit Scene Actor Component mostly works for standalone headsets with a well-defined room environment, but it seems to insist on spawning random objects (BP Anchor Actor Spawner). That's particularly frustrating when testing on PC VR, since you can't set up a room there, so it always falls back to random objects.
  6. QUEST PRO - We want to use the Quest Pro because of its face and eye tracking, but it really struggles compared to the power of the Quest 3.
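
As a taste of the bandwidth trimming that challenge 4 forced on us, here's a minimal sketch of the general idea: quantize blendshape-style curve weights down to single bytes before replicating them. The struct name, layout, and the idea of replicating it as a pawn property are illustrative assumptions for this post, not code from Live Link or the Movement SDK.

```cpp
// QuantizedFacePose.h -- illustrative only; names and layout are ours, not from any Meta SDK.
#pragma once

#include "CoreMinimal.h"
#include "QuantizedFacePose.generated.h"

USTRUCT()
struct FQuantizedFacePose
{
    GENERATED_BODY()

    // Each face/body curve is a 0..1 weight; one byte per curve instead of a
    // 32-bit float cuts the replicated payload to a quarter of its size.
    UPROPERTY()
    TArray<uint8> CurveWeights;

    void Pack(const TArray<float>& InWeights)
    {
        CurveWeights.SetNumUninitialized(InWeights.Num());
        for (int32 i = 0; i < InWeights.Num(); ++i)
        {
            CurveWeights[i] = static_cast<uint8>(FMath::RoundToInt(FMath::Clamp(InWeights[i], 0.f, 1.f) * 255.f));
        }
    }

    void Unpack(TArray<float>& OutWeights) const
    {
        OutWeights.SetNumUninitialized(CurveWeights.Num());
        for (int32 i = 0; i < CurveWeights.Num(); ++i)
        {
            OutWeights[i] = CurveWeights[i] / 255.f;
        }
    }
};
```

A struct like this could be sent from the performer's pawn as a replicated property or an unreliable multicast each frame, with the audience side calling Unpack and feeding the weights back into the avatar's animation.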

Accomplishments that we're proud of

  1. CO-LOCATED MULTIPLAYER - Entirely virtual shows are great for remote accessibility, but there's always a special magic to experiencing a live show together in the same room with others. That meant that in addition to the regular ol' challenges of any multiplayer experience, we needed to tackle a white whale of a problem-- broadcasting live body data, hundreds of position, rotation, and float values, to audience members in real time. Our solution is a little hacky (ahem), but it's working (see the packing sketch after this list). Our particular implementation even gave us a good post-rationalization for a 1920s Steampunk aesthetic.
  2. CUSTOM CHARACTERS - Faced with a litany of technical challenges, it was tempting not to get too artistic with our project, and we could have stuck with stock avatars. Not only did we create custom MetaHumans and outfits, but Dante on our team modeled and rigged our Deer God and Paper Dragon characters completely from scratch from within VR using Medium!
  3. STRIKE A POSE - There's something so satisfying about the ability to strike poses, play a game of Freeze, or give yourself an army of dead-eyed audience members. It didn't work and didn't work and didn't work, then finally worked. It turned out that on standalone, skeletal meshes were the enemy and poseable meshes (though not via any of the code we expected) were the answer (see the pose-freeze sketch after this list).
  4. WEB BROWSER ACCESS - After puzzling over various ways to bring in all the items a performer could want (e.g., dialogue scripts, songs, videos, and so on) through complicated streaming APIs, the idea occurred to us: what if we just had a browser that you could grab and move around your scene? Then you could pull up anything from a Google Doc to a YouTube video to help you give the best performance ever. By golly, we did it.
  5. BRINGING SCENE OBJECTS TO LIFE - This absolutely started as a joke and turned into something special. We can't believe we managed to map face tracking to a Desk, which gets placed at the spot of an actual table in your scene. It feels great, and there's nothing quite like seeing a Shakespearean soliloquy coming out of it.
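
About that hacky broadcast in (1): the underlying trick is just flattening the tracked values into a compact message and firing it over the local network. Below is a minimal, self-contained sketch of hand-encoding an OSC 1.0 message in plain C++; the address pattern "/perfomr/pose" and the flat list-of-floats layout are placeholders for this post, not our actual wire format.

```cpp
// Illustrative OSC 1.0 message packing -- address and payload layout are assumptions.
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a string followed by a NUL, padded with NULs to a 4-byte boundary as OSC requires.
static void AppendPaddedString(std::vector<uint8_t>& Buffer, const std::string& Text)
{
    Buffer.insert(Buffer.end(), Text.begin(), Text.end());
    Buffer.push_back('\0');
    while (Buffer.size() % 4 != 0)
    {
        Buffer.push_back('\0');
    }
}

// Append a 32-bit float in big-endian byte order (OSC is big-endian on the wire).
static void AppendBigEndianFloat(std::vector<uint8_t>& Buffer, float Value)
{
    uint32_t Bits = 0;
    std::memcpy(&Bits, &Value, sizeof(Bits));
    Buffer.push_back((Bits >> 24) & 0xFF);
    Buffer.push_back((Bits >> 16) & 0xFF);
    Buffer.push_back((Bits >> 8) & 0xFF);
    Buffer.push_back(Bits & 0xFF);
}

// Build one OSC message: address pattern, type-tag string (",fff..."), then the float arguments.
std::vector<uint8_t> BuildPoseMessage(const std::vector<float>& Weights)
{
    std::vector<uint8_t> Buffer;
    AppendPaddedString(Buffer, "/perfomr/pose");
    AppendPaddedString(Buffer, "," + std::string(Weights.size(), 'f'));
    for (float Weight : Weights)
    {
        AppendBigEndianFloat(Buffer, Weight);
    }
    return Buffer; // Send these bytes over UDP to each co-located headset.
}
```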
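
And for the freeze mechanic in (3), the heart of what finally worked on standalone is copying the live skeletal mesh's current pose onto a poseable mesh component. This is a minimal sketch with illustrative names; it assumes the frozen mesh was created with the same skeletal mesh asset so the bone hierarchies line up.

```cpp
#include "Components/SkeletalMeshComponent.h"
#include "Components/PoseableMeshComponent.h"

// Snapshot the performer's current pose onto a poseable mesh "statue".
// Assumes FrozenMesh uses the same skeletal mesh asset as LiveMesh.
void FreezeCurrentPose(USkeletalMeshComponent* LiveMesh, UPoseableMeshComponent* FrozenMesh)
{
    if (!LiveMesh || !FrozenMesh)
    {
        return;
    }

    // Copy the current animated pose (driven by the tracked body data) in one call.
    // The poseable mesh then holds that pose with no AnimInstance ticking behind it.
    FrozenMesh->CopyPoseFromSkeletalComponent(LiveMesh);

    // Individual bones could still be tweaked afterwards if needed, e.g.:
    // FrozenMesh->SetBoneRotationByName(TEXT("head"), FRotator::ZeroRotator, EBoneSpaces::ComponentSpace);
}
```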

What we learned

With a solid foundation of SDKs, there's far more opportunity to be creative and build something truly special than when you need to build out everything from co-presence to body tracking to mixed reality scene understanding from the ground up. Standing on the shoulders of giants lets us reach much farther.

What's next for PerfoMR

  1. FEASIBILITY STUDIES - We believe the PerfoMR platform has tremendous potential for everyone from dorm room divas to VTubers to the biggest artists working today, and we want to validate that belief alongside professional performers and educators. We'd love the opportunity to work on some case studies to see what they love, what is clutter, and of course what features they'd love to see us build next.
  2. PLAYBACK - The ability to record a session and play it back! Incredible potential for helping live shows live on, on demand, while still making you feel present, like you're right there... especially when 'right there' happened at your actual present location.
  3. EXPAND AUDIENCE - We want PerfoMR to be an MR-first platform, but if you can't go to a live show, we want to bring the live show to your living room. We'll play more with scale (tabletop shows!) and scene understanding to help the digital fit seamlessly in with the real while enhancing all of its capabilities.
  4. MORE CHARACTERS - We're just getting started! We want characters and costumes for all occasions. And not only humanoids-- performing as a desk is an unexpected delight, and the potential to inhabit not just humanoid characters but all sorts of objects is beyond thrilling. Before long you'll be performing as the house from Disney's Encanto.
  5. OPTIMIZE TO HIGH HEAVEN - There's plenty of janky code held together with duct tape, particularly in our multiplayer setup, which strings together a number of systems (including OSC!?). We want even more responsive scene understanding and the capability to broadcast a performance to hundreds and hundreds of audience members around the world.
  6. QUEST PRO 2? - If the Quest Pro 2 ends up having face and eye tracking plus more power, we'll have the perfect device to recommend to all our performers.
