Frequently Asked Questions

What is the Meshcapade platform?

We combine computer vision, graphics, and machine learning to provide an avatar-as-a-service platform: the avatar layer underlying e-commerce and the metaverse.

The platform can create realistic, accurate 3D avatars of humans from any source of data, including:

  • photos
  • videos
  • body measurements
  • 3D scans
  • motion capture
  • text

What is the SMPL codec?

Our platform is powered by our generative body model, SMPL, which serves as the foundational codec layer of the metaverse.

It enables us to create neural networks and machine learning models that can convert any form of limited input data into digital 3D humans.
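To make the "generative model" idea concrete, here is a minimal, illustrative sketch of how a SMPL-style parametric body works: a body shape is a template mesh plus a linear combination of learned shape displacements. The vertex counts and blend shapes below are random toy values, not the real learned SMPL data, and pose blend shapes and skinning are omitted.

```python
# Toy sketch of a SMPL-style parametric body model (illustrative only;
# the real SMPL model uses learned blend shapes and ~6890 vertices).
import numpy as np

N_VERTS = 100   # toy vertex count (real SMPL: 6890)
N_SHAPE = 10    # number of shape parameters ("betas")

rng = np.random.default_rng(0)
template = rng.normal(size=(N_VERTS, 3))             # mean body mesh
shape_dirs = rng.normal(size=(N_VERTS, 3, N_SHAPE))  # shape blend shapes

def body_vertices(betas: np.ndarray) -> np.ndarray:
    """Mesh vertices for shape parameters `betas`: the template mesh
    plus a linear combination of shape displacements."""
    return template + shape_dirs @ betas

# A handful of numbers describes an entire body shape — this compactness
# is what lets SMPL parameters act like a "codec" for avatars.
mean_body = body_vertices(np.zeros(N_SHAPE))   # all-zero betas -> mean body
custom = np.zeros(N_SHAPE)
custom[0] = 2.0                                # vary the first shape component
print(body_vertices(custom).shape)             # one (N_VERTS, 3) mesh per body
```

The key point is that recovering a person's avatar from a photo, video, or measurements reduces to estimating a small parameter vector, which is a well-posed target for machine learning models.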

How is Meshcapade different from other avatar solutions?

We are building the avatar infrastructure layer that enables everyone else, including all other avatar companies, to do far more.

Meshcapade is solving the computer vision problem of bringing people’s physical reality and their real motion into digital spaces. We want to enable other businesses, from game-ready and social-media avatar solutions to apparel companies.

Most avatar solutions have created a layer of game-ready, stylized avatar features (hair types, face types, body types, etc.), all built manually by artists. With these layers, they can use a simple linear model to create new character variations. However, this kind of data cannot be used to train machine learning models, and in particular cannot serve as a generative model for creating realistic digital characters.

In summary, these systems are designed for a limited range of game-character variations.

Our platform uses SMPL to power our creation tools. SMPL is a generative model trained on millions of real humans, and it can scale to recreate any human body on the planet. So we envision gaming and social-media avatar companies using our SMPL avatars as a codec layer for their characters. The stylized avatar features can simply be layered on top of the SMPL avatar so that they scale with SMPL to create any body shape and any body pose, directly from input like photos, video, or even text!

Think of the stylized avatar features of any of the other avatar companies as a “skin” that users can add on top of the SMPL avatar codec.

And at the other end, many apparel companies are building clothing designed to drape on SMPL avatars. With SMPL as the codec, a user could create their base avatar within one gaming universe, then use it at their favorite apparel store to shop for their real body size, because the codec layer still carries the physical attributes of the real person behind the avatar. And if the gaming company and the apparel store eventually want to play in the same virtual spaces, the connection through our codec layer lets the apparel store clothe SMPL avatars wearing the game-avatar skins as well.
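The exchange described above can be sketched in a few lines: two applications hand off an avatar as a small set of SMPL-style parameters instead of a heavy, application-specific mesh. The file format, field names, and functions here are invented for illustration; they are not the actual format used by Meshcapade or the SMPL project.

```python
# Hypothetical sketch of the "codec" hand-off between two applications.
import json
import os
import tempfile

def export_avatar(path, betas, height_m):
    """A game could export a player's avatar as compact shape parameters
    (hypothetical JSON layout, for illustration only)."""
    with open(path, "w") as f:
        json.dump({"model": "smpl-like", "betas": betas, "height_m": height_m}, f)

def import_avatar(path):
    """An apparel store could read the same file and recover the real body
    attributes needed for sizing, without ever touching the game's 'skin'."""
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "avatar.json")
export_avatar(path, betas=[0.5, -1.2, 0.0], height_m=1.78)
avatar = import_avatar(path)
print(avatar["height_m"])  # 1.78
```

Because both sides agree on the same compact parameterization, neither needs to understand the other's rendering layer, which is the essence of the "PDF effect" described below.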

What about neural models?

Soon, most stylized and realistic avatar features for digital use will be replaced with neural models. There is still a lot of work to be done in the neural-representation space, so it is all mostly academic right now.

But even the neural representations of clothing or features will need to be layered onto a generative model. SMPL is already the de-facto standard avatar layer for neural models as well.

Is it more important to have the ability to generate life-like avatars or the animation behind them?

For avatars to serve as stand-ins for our real physical selves in the digital world, they need to not only scale to faithfully recreate the body shape of every person on the planet, but also capture the nuances of our body movements, including facial expressions, soft-tissue deformation, and hand pose.

What is the most exciting impact of Meshcapade’s platform?

The interoperability between different spaces is the toughest and most interesting nut to crack here. With the kind of interoperability we envision, we will unlock forms of online interaction that people don’t even imagine are possible today.

We are already starting to make an impact in the apparel space, all of it organic. We’ve begun to see a network effect: 3D scanning, fitness, and apparel companies can communicate and connect more easily because they solve their avatar needs through our platform. They are all finally speaking the same language, exchanging files in the same, expected SMPL codec format.

It’s like the PDF effect. Before PDFs, people shared Word docs, RTF files, and the like, which would open with different formatting on every system, a nightmare for tables and figures. With PDFs, everyone knew they were getting the same single file format. You can add pages to a PDF, fill it in, sign it, and so on, and when you send it onward, it’s still in the same format.